An Empirical Study of Leveraging Knowledge Distillation for Compressing Multilingual Neural Machine Translation Models
Knowledge distillation (KD) is a well-known method for compressing neural
models. However, works focusing on distilling knowledge from large multilingual
neural machine translation (MNMT) models into smaller ones are practically
nonexistent, despite the popularity and superiority of MNMT. This paper bridges
this gap by presenting an empirical investigation of knowledge distillation for
compressing MNMT models. We take Indic-to-English translation as a case study
and demonstrate that commonly used language-agnostic and language-aware KD
approaches yield models that are 4-5x smaller but suffer performance
drops of up to 3.5 BLEU. To mitigate this, we then experiment with design
considerations such as shallower versus deeper models, heavy parameter sharing,
multi-stage training, and adapters. We observe that deeper compact models tend
to be as good as shallower non-compact ones, and that fine-tuning a distilled
model on a high-quality subset slightly boosts translation quality. Overall, we
conclude that compressing MNMT models via KD is challenging, indicating immense
scope for further research.
Comment: accepted at EAMT 2023
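Knowledge distillation for NMT is typically implemented either at the word level (the student matches the teacher's per-token output distribution) or at the sequence level (the student trains on the teacher's translations). As a purely illustrative aid, here is a minimal PyTorch sketch of the standard word-level objective; the tensor names, temperature, and interpolation weight are hypothetical, and this is not necessarily the paper's exact setup.

    import torch
    import torch.nn.functional as F

    def word_level_kd_loss(student_logits, teacher_logits, target_ids,
                           pad_id, alpha=0.5, temperature=1.0):
        # Soft loss: KL divergence between the student's and the (frozen,
        # detached) teacher's per-token distributions.
        # Logits have shape (batch, seq_len, vocab).
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="none",
        ).sum(-1)
        # Hard loss: ordinary cross-entropy against the reference tokens.
        hard = F.cross_entropy(student_logits.transpose(1, 2), target_ids,
                               reduction="none")
        mask = (target_ids != pad_id).float()  # ignore padding positions
        soft = (soft * mask).sum() / mask.sum()
        hard = (hard * mask).sum() / mask.sum()
        # Hinton-style interpolation; T**2 rescales the soft-target gradient.
        return alpha * soft * temperature ** 2 + (1 - alpha) * hard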
Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
Large Language Models (LLMs) have demonstrated impressive performance on
Natural Language Processing (NLP) tasks, such as Question Answering,
Summarization, and Classification. The use of LLMs as evaluators that can rank
or score the output of other models (usually LLMs) has become increasingly
popular due to the limitations of current evaluation techniques, including the
lack of appropriate benchmarks and metrics, cost, and limited access to human annotators.
While LLMs are capable of handling approximately 100 languages, the majority of
languages beyond the top 20 lack systematic evaluation across various tasks,
metrics, and benchmarks. This creates an urgent need to scale up multilingual
evaluation to ensure a precise understanding of LLM performance across diverse
languages. LLM-based evaluators seem like the perfect solution to this problem,
as they do not require human annotators, human-created references, or
benchmarks and can theoretically be used to evaluate any language covered by
the LLM. In this paper, we investigate whether LLM-based evaluators can help
scale up multilingual evaluation. Specifically, we calibrate LLM-based
evaluation against 20k human judgments of five metrics across three
text-generation tasks in eight languages. Our findings indicate that LLM-based
evaluators may exhibit a bias towards higher scores and should therefore be used
with caution, always calibrated against a dataset of native-speaker judgments,
particularly in low-resource and non-Latin-script languages.
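Calibration of this kind boils down to comparing LLM-assigned scores with human judgments per language and metric. A minimal sketch of such a check, with hypothetical 1-5 scores and SciPy's rank correlation:

    import statistics
    from scipy.stats import kendalltau

    def calibration_report(human_scores, llm_scores):
        # A positive mean gap means the LLM evaluator scores more generously
        # than native speakers, the upward bias the study cautions about.
        gap = statistics.mean(llm_scores) - statistics.mean(human_scores)
        tau, p_value = kendalltau(human_scores, llm_scores)
        return {"mean_gap": gap, "kendall_tau": tau, "p_value": p_value}

    # Hypothetical judgments for one language/metric pair, on a 1-5 scale.
    print(calibration_report(human_scores=[3, 4, 2, 5, 3],
                             llm_scores=[4, 4, 3, 5, 4]))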
MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks
Recently, there has been a rapid advancement in research on Large Language
Models (LLMs), resulting in significant progress in several Natural Language
Processing (NLP) tasks. Consequently, there has been a surge in LLM evaluation
research to comprehend the models' capabilities and limitations. However, much
of this research has been confined to the English language, leaving LLM
building and evaluation for non-English languages relatively unexplored. Several
new LLMs have since been introduced, necessitating their evaluation on
non-English languages. This study expands our MEGA benchmarking suite by
including six new datasets to form the MEGAVERSE benchmark. The benchmark
comprises 22 datasets covering 81 languages, including low-resource African
languages. We evaluate several state-of-the-art LLMs such as GPT-3.5-Turbo, GPT-4,
PaLM2, and Llama2 on the MEGAVERSE datasets. Additionally, we include two
multimodal datasets in the benchmark and assess the performance of the
LLaVa-v1.5 model. Our experiments suggest that GPT-4 and PaLM2 outperform the
Llama models on various tasks, notably on low-resource languages, with GPT-4
outperforming PaLM2 on more datasets than vice versa. However, issues such as
data contamination must be addressed to obtain an accurate assessment of LLM
performance on non-English languages.
Comment: 23 pages, 30 figures and 1 table
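The head-to-head comparison reported above ("on more datasets than vice versa") amounts to counting per-dataset wins rather than averaging scores across datasets. A toy sketch with hypothetical dataset names and numbers:

    # Hypothetical per-dataset scores; only the win-counting logic matters here.
    scores = {
        "dataset_a": {"gpt4": 71.2, "palm2": 68.4},
        "dataset_b": {"gpt4": 64.0, "palm2": 66.1},
        "dataset_c": {"gpt4": 55.3, "palm2": 51.9},
    }

    wins = {"gpt4": 0, "palm2": 0}
    for dataset, result in scores.items():
        wins[max(result, key=result.get)] += 1  # winner takes the dataset
    print(wins)  # {'gpt4': 2, 'palm2': 1}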
IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages
India has a rich linguistic landscape with languages from 4 major language
families spoken by over a billion people. The 22 languages listed in the
Constitution of India (referred to as scheduled languages) are the focus of
this work. Given this linguistic diversity, high-quality and accessible Machine
Translation (MT) systems are essential in a country like India. Prior to this
work, there was (i) no parallel training data spanning all 22 languages,
(ii) no robust benchmarks covering all these languages and containing content
relevant to India, and (iii) no existing translation model that supports all
22 scheduled languages of India. In this work, we aim to address this gap
by focusing on the missing pieces required for enabling wide, easy, and open
access to good machine translation systems for all 22 scheduled Indian
languages. We identify four key areas of improvement: curating and creating
larger training datasets, creating diverse and high-quality benchmarks,
training multilingual models, and releasing models with open access. Our first
contribution is the release of the Bharat Parallel Corpus Collection (BPCC),
the largest publicly available parallel corpus for Indic languages. BPCC
contains 230M bitext pairs in total, of which 126M are newly
added, including 644K manually translated sentence pairs created as part of
this work. Our second contribution is the release of the first n-way parallel
benchmark covering all 22 Indian languages, featuring diverse domains,
Indian-origin content, and source-original test sets. Next, we present
IndicTrans2, the first model to support all 22 languages, surpassing existing
models on multiple existing and new benchmarks created as a part of this work.
Lastly, to promote accessibility and collaboration, we release our models and
associated data with permissive licenses at
https://github.com/ai4bharat/IndicTrans2
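The repository is the authoritative source for running the released checkpoints. As a rough, non-authoritative sketch, inference via Hugging Face transformers might look like the following; the checkpoint identifier, the language-tag format, and the trust_remote_code requirement are assumptions, so consult the repository's README for the supported usage and its recommended pre- and post-processing.

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    # Assumed checkpoint name; see the IndicTrans2 repository for the
    # actually released identifiers and preprocessing utilities.
    model_id = "ai4bharat/indictrans2-indic-en-1B"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)

    # IndicTrans2 conditions on source and target language tags; the tag
    # placement shown here is illustrative only.
    text = "hin_Deva eng_Latn यह एक उदाहरण वाक्य है।"
    batch = tokenizer(text, return_tensors="pt")
    output = model.generate(**batch, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))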